Abstract:
The remote detection of liveness is critical for senior and baby care, disaster response, the military, and law enforcement. Existing solutions are mostly based on special sensor hardware or the spectral signature of living skin. This paper uses commercial electro-optical and infrared (EO/IR) sensors to capture a very short video for low-cost and fast liveness detection. The key components of our system are tiny human body and face detection from long-range, low-resolution video, and remote liveness detection based on micro-motion in a short human body and face video. These micro-motions are caused by breathing and heartbeat. A deep learning architecture is designed for remote body and face detection. A novel algorithm is proposed for adaptive sensor and background noise cancellation. An air-platform motion compensation algorithm is tested on video data collected from a drone. The key advantages are: low cost; requires only a very short video; works with many parts of the human body even when skin is not visible; works on any motion caused by the eyes, mouth, heartbeat, breathing, or other body parts; and works in all lighting conditions. To the authors' best knowledge, this is the first work on video micro-motion-based liveness detection on a moving platform and from a long standoff range of 100 m. Once a subject is deemed alive, video-based remote heart rate detection is applied to assess the physiological and psychological state of the subject. This is also the first work on outdoor remote heart rate detection from a long standoff range of 100 m. In an evaluation on the publicly available indoor COHFACE dataset, our heart rate estimation algorithm outperforms all published work on the same dataset.
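The micro-motion cue described above can be sketched with a simple spectral test. This is an illustrative sketch only, not the paper's algorithm: it assumes a grayscale body/face region of interest has already been extracted and the camera is stationary (no platform motion compensation), and the SNR threshold is an arbitrary choice for the example.

```python
import numpy as np

def micro_motion_liveness(frames, fps, band=(0.15, 3.0), snr_thresh=10.0):
    """Detect liveness from periodic micro-motion in a stack of ROI frames.

    frames: array of shape (T, H, W), grayscale body/face ROI over time.
    fps: video sampling rate in Hz.
    band: frequency band (Hz) covering breathing (~0.2-0.5 Hz) and
          heartbeat-induced motion (~0.8-3 Hz).
    Returns (is_alive, peak_freq_hz).
    """
    # Collapse each frame to its spatial mean; periodic micro-motion of
    # the chest or face modulates this value over time.
    signal = frames.reshape(len(frames), -1).mean(axis=1)
    signal = signal - signal.mean()              # remove DC component

    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if not in_band.any() or spectrum[in_band].sum() == 0:
        return False, 0.0

    # Peak-to-median SNR: a live subject produces a strong periodic peak,
    # while sensor noise alone yields a roughly flat in-band spectrum.
    peak = spectrum[in_band].max()
    noise = np.median(spectrum[in_band]) + 1e-12
    peak_freq = freqs[in_band][np.argmax(spectrum[in_band])]
    return bool(peak / noise > snr_thresh), float(peak_freq)
```

A real system would replace the spatial mean with per-region motion signals and add the paper's noise-cancellation and motion-compensation steps before the spectral test.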
Abstract:
Recent breakthroughs in deep learning and artificial intelligence technologies have enabled numerous mobile applications. While traditional computation paradigms rely on mobile sensing and cloud computing, deep learning implemented on mobile devices provides several advantages: low communication bandwidth, small cloud computing resource cost, quick response time, and improved data privacy. Research and development of deep learning on mobile and embedded devices has recently attracted much attention. This paper provides a timely review of this fast-paced field to give researchers, engineers, practitioners, and graduate students a quick grasp of the recent advancements of deep learning on mobile devices. We discuss hardware architectures for mobile deep learning, including Field Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), and recent mobile Graphics Processing Units (GPUs). We present Size, Weight, Area and Power (SWAP) considerations and their relation to algorithm optimizations, such as quantization, pruning, compression, and approximations that simplify computation while retaining accuracy. We cover existing systems and give a state-of-the-industry review of the TensorFlow, MXNet, Mobile AI Compute Engine (MACE), and Paddle-mobile deep learning platforms. We discuss resources for mobile deep learning practitioners, including tools, libraries, models, and performance benchmarks. We present applications of various mobile sensing modalities to industries ranging from robotics, healthcare, multimedia, and biometrics to autonomous driving and defense. We address the key deep learning challenges to overcome, including low-quality data and small training/adaptation datasets. In addition, the review provides numerous citations and links to existing code bases implementing various technologies. These resources lower the user's barrier to entry into the field of mobile deep learning.
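As a concrete illustration of the quantization optimization mentioned above, here is a minimal sketch of generic symmetric post-training int8 quantization. This is not any particular framework's implementation; real mobile toolchains add per-channel scales, zero points, and calibration.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor post-training quantization to int8.

    Maps float weights onto the integer range [-127, 127] with a single
    scale factor, the basic scheme behind int8 inference on mobile
    accelerators. Returns (q, scale) with q * scale ~= weights.
    """
    scale = np.abs(weights).max() / 127.0
    if scale == 0:
        return np.zeros_like(weights, dtype=np.int8), 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights, e.g. for accuracy checks."""
    return q.astype(np.float32) * scale
```

The per-weight rounding error is bounded by scale/2, which is why accuracy is largely retained when the weight distribution is well spread over the quantized range.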
Abstract:
Recent breakthroughs in EO/IR sensing, real-time signal processing, and deep machine learning technologies have enabled standoff heart rate estimation from facial and body video. This technology is also known as remote photoplethysmography (rPPG). Research and development of rPPG has attracted much attention recently. This paper gives a timely review of this fast-paced field to give researchers, engineers, and graduate students a quick grasp of the recent advancement of rPPG. We first review two rPPG design approaches: color-variation-based and motion-based detection. To enable rPPG for less constrained use cases, various signal processing and machine learning algorithms have been developed to handle signal variability introduced by the lighting source, view angle, and subject motion. To help newcomers quickly start work in this field, we then describe existing rPPG research datasets, open-source rPPG research tools, and some demonstration systems. Six commonly used rPPG evaluation metrics are described to evaluate and visualize research advances in this field. As rPPG technology matures, more application domains become possible. We cover six applications of rPPG in the commercial, security, and defense domains, including emerging applications in biometric liveness and video media authenticity. Finally, we outline some challenges yet to overcome, especially in the security and defense domain: unconstrained outdoor environments, rPPG from air platforms, nighttime operation, and moving and non-cooperative subjects. These challenges require special algorithmic considerations.
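The color-variation approach can be illustrated with a minimal frequency-domain sketch. It assumes a pre-extracted, spatially averaged green-channel trace of a face region; real rPPG pipelines add detrending, band-pass filtering, skin segmentation, and motion robustness on top of this idea.

```python
import numpy as np

def estimate_heart_rate(green_means, fps, band=(0.7, 3.0)):
    """Estimate heart rate (bpm) from a mean green-channel signal.

    Blood volume changes with each heartbeat modulate skin reflectance,
    strongest in the green channel -- the basis of color-variation rPPG.

    green_means: 1-D array, one spatially averaged green value per frame.
    band: plausible heart-rate range in Hz (here 42-180 bpm).
    """
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                               # remove DC level
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak_hz = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak_hz                          # Hz -> beats per minute
```

The restriction to a physiologically plausible band is what lets the dominant spectral peak be read as a pulse rate rather than, say, lighting flicker.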
Abstract:
Real-time wavefront control for adaptive laser communication and imaging systems requires fast measurement of image quality. Statistical analysis of the speckle field provides effective image quality criteria for adaptive correction of phase-distorted images. We propose an analog continuous-time VLSI (very-large-scale integration) spectrum analysis chip to provide such a real-time image quality measurement. The chip takes as analog input the signal sensed by a photodetector located in the speckle field and computes its spectral distribution continuously. Experiments and analysis on a distorted laser beam were conducted with the analog spectrum analysis chip. A target-in-the-loop system is under development to demonstrate the capability of real-time adaptive imaging.
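One widely used statistical criterion of this kind is speckle contrast, the ratio of the standard deviation to the mean of the detected intensity. This digital sketch is only illustrative of what a speckle-statistics quality metric looks like; the criterion actually computed by the analog chip may differ.

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = sigma / mean of detected intensity samples.

    A fully developed speckle pattern (exponential intensity statistics)
    gives C close to 1; adaptive correction changes the intensity
    statistics, so C can serve as a fast, image-free quality criterion.
    """
    i = np.asarray(intensity, dtype=float)
    return i.std() / i.mean()
```

Because it reduces a whole field to one statistic of a single detector signal, a metric of this form is cheap enough to evaluate inside a real-time wavefront-control loop.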
Abstract:
Accelerometers and gyroscopes embedded in mobile devices have shown great potential for non-obtrusive gait biometrics by directly capturing a user's characteristic locomotion. Despite the success of gait analysis under controlled experimental settings using these sensors, their performance in realistic scenarios is unsatisfactory because the data depend on sensor placement. In practice, the placement of mobile devices is unconstrained. In this paper, we propose a novel gait representation for accelerometer and gyroscope data that is both sensor-orientation-invariant and highly discriminative, enabling high-performance gait biometrics for real-world applications. We also adopt the i-vector paradigm, a state-of-the-art machine learning technique widely used for speaker recognition, to extract gait identities using the proposed gait representation. Performance studies on both the naturalistic McGill University gait dataset and the Osaka University gait dataset containing 744 subjects show the clear superiority of this novel gait biometrics approach over existing methods.
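A minimal example of an orientation-invariant transform is the per-sample magnitude of the 3-axis signal. This is a far simpler statistic than the paper's proposed representation and is shown only to illustrate the invariance idea.

```python
import numpy as np

def magnitude_series(xyz):
    """Orientation-invariant view of a 3-axis accelerometer stream.

    xyz: array of shape (T, 3), one (x, y, z) sample per time step.
    The Euclidean norm of each sample is unchanged by any rotation of
    the device, so it removes the dependence on sensor orientation that
    hurts raw-axis gait features.
    """
    return np.linalg.norm(np.asarray(xyz, dtype=float), axis=1)
```

The trade-off is that the magnitude discards directional information; a discriminative representation must recover that information in some rotation-invariant form, which is what motivates the more elaborate design in the paper.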